docs: add benchmarking blog posts and performance reference page #254

SamBarker wants to merge 7 commits
Conversation
Covers methodology, test environment, passthrough proxy results, encryption latency and throughput ceiling, the per-connection scaling insight, and sizing guidance. Includes a TODO placeholder for the connection sweep results before publication.

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>

Covers why we chose OMB over Kafka's own tools, the benchmark harness we built (Helm chart, orchestration scripts, JBang result processors), workload design rationale, CPU flamegraphs with embedded interactive iframes, the per-connection ceiling discovery, bugs found in our own tooling, and the cluster recovery incident.

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>

Adds /performance/ as a dedicated quick-reference page with headline benchmark numbers, comparison tables, and sizing guidance, linked from both blog posts. Updates the existing Performance section in overview.markdown with the key headline numbers and a link to the full reference page.

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>
| Component | CPU share |
| --- | --- |
| Kroxylicious proxy | 1.4% |
| GC | 0.1% |

The proxy is overwhelmingly I/O-bound. 59% of CPU is in `send`/`recv` syscalls, the inherent cost of maintaining two TCP connections (client→proxy, proxy→Kafka) with data flowing through the JVM. The proxy itself accounts for 1.4%. It really is a TCP relay with protocol awareness.
I wonder how much that's down to the decode predicate thing -- basically we know the filter chain, and what each filter in it wants to intercept, and I think we avoid doing the request/response decoding when we know nothing is interested. That was code that was in there from the beginning, but I don't actually know how relevant it is -- maybe some of the internal filters mean we're decoding requests and response always, in which case 1.4% is impressive. Or maybe we're acting more like a L4 proxy most of the time, in which case 1.4% is not quite as impressive.
Great question — this is actually a stronger story than the original prose suggested. The default infrastructure filters (BrokerAddressFilter, TopicNameCacheFilter, ApiVersionsIntersect) are doing genuine L7 work: metadata, FindCoordinator, and API version exchanges are fully decoded for address rewriting and version negotiation. But the high-volume produce/consume traffic hits the decode predicate and passes through without full deserialisation. So the proxy is selectively L7 — real protocol awareness where it needs it, L4-like passthrough on the hot path. The 1.4% is the cost of that design, and it validates it. Updating the prose to make this explicit.
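To make the mechanism concrete, here is a minimal sketch of the decode-predicate idea described above. All names are illustrative assumptions, not the real Kroxylicious API: the point is only that the proxy unions what each filter in the chain declares an interest in, and pays for full protocol decoding only when that union matches a frame's API key.

```java
import org.apache.kafka.common.protocol.ApiKeys;

import java.util.EnumSet;
import java.util.Set;

// Hypothetical sketch of a decode predicate: the union of every filter's
// declared interests decides whether a frame is fully decoded (L7 work)
// or relayed as opaque bytes (L4-like passthrough).
final class DecodePredicate {
    private final Set<ApiKeys> intercepted;

    DecodePredicate(Set<ApiKeys> intercepted) {
        this.intercepted = EnumSet.copyOf(intercepted);
    }

    boolean shouldDecode(ApiKeys apiKey) {
        return intercepted.contains(apiKey);
    }

    public static void main(String[] args) {
        // Infrastructure filters want metadata-style requests for address
        // rewriting and version negotiation...
        var predicate = new DecodePredicate(
                EnumSet.of(ApiKeys.METADATA, ApiKeys.FIND_COORDINATOR, ApiKeys.API_VERSIONS));
        // ...so those frames are decoded, while hot-path produce/fetch
        // traffic passes through without full deserialisation.
        System.out.println(predicate.shouldDecode(ApiKeys.METADATA)); // true
        System.out.println(predicate.shouldDecode(ApiKeys.PRODUCE));  // false
    }
}
```

Under this model the 1.4% CPU share is the cost of being selectively L7: decoding is bounded by the low-volume control-plane traffic, not by data-plane throughput.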
The direct crypto cost is 13.3% (11.3% AES-GCM + 2.0% Kroxylicious filter logic). But encryption adds indirect costs too:

- **Buffer management (+5.8%)**: encrypted records need to be read into buffers, encrypted, and written to new buffers, meaning more allocation and more copying
Did we ever figure out how to reuse the buffers more? I think that was a TODO at one point.
Correct — the TODO was never addressed. A BufferPool class existed at one point but was deleted as unused in early 2024. Cipher instances are still created fresh per operation. These remain genuine open optimisation opportunities.
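For reference, a sketch of the shape that open optimisation could take. This is hypothetical code, not anything in Kroxylicious: it reuses one long-lived `Cipher` (AES-GCM requires a fresh IV per record, but not a fresh `Cipher` instance) and one growable scratch buffer, instead of allocating both per operation.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.security.SecureRandom;

// Hypothetical sketch: one reusable Cipher re-initialised per record, and a
// reusable scratch buffer in place of a new output buffer per operation.
final class ReusableEncryptor {
    private static final int IV_LENGTH_BYTES = 12;
    private static final int TAG_LENGTH_BITS = 128;

    private final Cipher cipher;
    private final SecretKeySpec key;
    private final SecureRandom random = new SecureRandom();
    private ByteBuffer scratch = ByteBuffer.allocateDirect(64 * 1024);

    ReusableEncryptor(byte[] keyBytes) throws Exception {
        this.cipher = Cipher.getInstance("AES/GCM/NoPadding");
        this.key = new SecretKeySpec(keyBytes, "AES");
    }

    /** Encrypts into the reused scratch buffer; the caller must consume it
     *  (and transmit the IV alongside the ciphertext) before the next call. */
    ByteBuffer encrypt(ByteBuffer plaintext, byte[] ivOut) throws Exception {
        random.nextBytes(ivOut); // fresh IV per record is mandatory for GCM safety
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_LENGTH_BITS, ivOut));
        int needed = cipher.getOutputSize(plaintext.remaining());
        if (scratch.capacity() < needed) {
            scratch = ByteBuffer.allocateDirect(needed); // grow once, reuse after
        }
        scratch.clear();
        cipher.doFinal(plaintext, scratch); // writes ciphertext + tag, no intermediate arrays
        scratch.flip();
        return scratch;
    }
}
```

Whether this helps in practice would need profiling: the flamegraph attributes +5.8% to buffer management, so the ceiling on the win is known.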
…aming

- Shift publication dates to May 21 and May 28
- Replace speculative per-connection ceiling explanation with empirical finding: encryption throughput ceiling scales linearly with CPU budget (validated at 1000m, 2000m, 4000m)
- Add sizing formula: CPU (mc) = 20 × produce_MB_per_s, with worked example
- Add RF=3 masking caveat: initial 1-topic sweeps conflated Kafka replication ceiling with proxy CPU ceiling; coefficient derived from RF=1 multi-topic workloads
- Post 2: add full investigation narrative: workload isolation approach, coefficient derivation, 4-core confirmation, and 2-core prediction/validation
- Drop stale "future work" items that are now complete

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>
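The sizing formula in this commit is simple enough to state as code. The helper below is an illustrative transcription: the function name and the ceiling rounding are assumptions of this sketch, while the coefficient (20 mc per MB/s, from the RF=1 multi-topic runs) comes from the commit itself.

```java
// Illustrative transcription of the commit's sizing formula:
//   CPU (mc) = 20 × produce_MB_per_s
static int encryptionProxyCpuMillicores(double produceMbPerSec) {
    return (int) Math.ceil(20.0 * produceMbPerSec);
}

// Worked example: 100 MB/s of produce traffic → 2000 mc, i.e. a 2-core budget,
// consistent with the linear scaling validated at 1000m, 2000m, and 4000m.
```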
The proxy is selectively L7: default infrastructure filters do genuine Kafka protocol work (address rewriting, API version negotiation, metadata caching) while high-volume produce/consume traffic bypasses full deserialisation via the decode predicate. The 1.4% proxy CPU share validates this design, not just reflects it. Also drop the Fyre cluster upgrade section — OCP-internal incident with no relevance to readers. Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com> Signed-off-by: Sam Barker <sam@quadrocket.co.uk>
- Warm up test environment intro: realistic deployment framing
- Add conversational lead-in to sizing guidance in both documents
- Improve caveats opener in Post 1
- Add caveats section to performance page (RF=3 masking, message size, horizontal scaling)

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>
Summary
- `/performance/` reference page summarising key numbers and linking to both posts
- `overview.markdown` updated with headline performance figures and a link to the reference page

Status
Draft — the posts are first drafts. Known open items:
Test plan
- `./run.sh` and verify site renders at `http://127.0.0.1:4000/`
- `/performance/` page renders with correct tables
- Links to `/performance/` work

🤖 Generated with Claude Code